Optic flow and the metric of the visual ground plane
A theory is developed in which the optic flow of an observer translating over the ground plane determines the metric of egocentric visual space. Optic flow is used to operationalize the equality of spatial intervals, much as physicists use time to compare spatial intervals. The theory predicts empirical matching ratios for collinear, sagittal intervals to within 2% of the mean (eight subjects, standard error also 2%). It further predicts that frontoparallel intervals on the ground plane will match sagittal intervals if their relative image motions match, which was confirmed empirically. It is suggested that the optic-flow metric serves to calibrate static depth cues such as angular elevation and binocular parallax.
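The idea of operationalizing interval equality by image motion can be illustrated for the simplest case of pure forward translation. In the sketch below (the function names and scenario are illustrative, not taken from the paper), an observer at eye height h moves at speed v; a ground point at sagittal distance d lies at angular elevation atan(h/d) below the horizon, and differentiating with respect to time gives an elevation rate of v·h/(h² + d²).

```python
import math

def elevation_rate(h, d, v):
    """Angular-elevation rate (rad/s) of a ground point at sagittal
    distance d, seen by an observer at eye height h translating
    forward at speed v.  Elevation below the horizon is
    theta = atan(h/d); since d shrinks at rate v,
    d(theta)/dt = v * h / (h**2 + d**2).
    """
    return v * h / (h**2 + d**2)

def interval_motion(h, d_near, d_far, v=1.0):
    """Relative image motion across a sagittal ground interval
    [d_near, d_far]: the difference of its endpoints' elevation
    rates.  Under the theory's logic, two intervals that produce
    the same relative image motion for the same self-motion count
    as equal."""
    return elevation_rate(h, d_near, v) - elevation_rate(h, d_far, v)
```

For example, `interval_motion(1.0, 1.0, 2.0)` exceeds `interval_motion(1.0, 2.0, 3.0)`: equal physical intervals farther away generate less relative image motion, which is why a flow-based metric makes nontrivial predictions about matching ratios.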
Visual perception of solid shape from occluding contours
The relative motion of object and observer induces a motion field in the observer's visual image that is smooth everywhere except along the object's occluding contours. Thus, occluding contours and smooth motion fields can be viewed as complementary and separate sources of information about an object's shape. I studied how the human visual system perceives solid shape from the occluding contours of rotating objects and from the smooth motion field induced by moving planar surface patches.

I propose a three-stage model for the perception of solid shape from the occluding contours of a rotating object. First, the object's motion is determined. I argue that this is only possible using points of correspondence, and only when the object's axis of rotation is frontoparallel. In the second stage, the motion field along the contour is used to compute relative depth and surface curvature along the rim, the contour's pre-image. Third, local shape descriptors are propagated inside the figure to yield a global percept of solid shape. To determine which shape descriptors are computed by human subjects, I used a novel task in which subjects have to discriminate between flat ellipses and solid ellipsoids of varying thickness. I found that discriminability is proportional to the inverse of radial curvature, but not to Gaussian or mean curvature. Certain slants of the axis of rotation decrease discriminability. Subjects who could discriminate ellipsoids from ellipses perceived the ellipsoids' angular velocity more veridically than subjects who could not.

Any smooth motion field can locally be described by divergence, curl, and deformation. If the motion field is induced by a rotating plane, the amount of deformation is proportional to the plane's slant and its angular velocity. Similarly, for translating planes, deformation is proportional to slant and image motion. Slant judgments of human observers were, to a first-order approximation, proportional to deformation per se; that is, observers do not take object motion into account. Recent psychophysical evidence suggests that human subjects need motion discontinuities to do so. Thus, contours might be necessary to perceive slant correctly from smooth motion fields.
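The divergence/curl/deformation decomposition of a smooth motion field is a standard first-order analysis; a minimal sketch (function name mine) computes the three invariants from the 2×2 velocity-gradient tensor of an image flow (u, v):

```python
import numpy as np

def flow_invariants(grad):
    """First-order differential invariants of an image motion field.

    grad = [[du/dx, du/dy],
            [dv/dx, dv/dy]]  for flow components (u, v).
    """
    ux, uy = grad[0]
    vx, vy = grad[1]
    divergence = ux + vy                       # isotropic expansion/contraction
    curl = vx - uy                             # rigid image rotation
    deformation = np.hypot(ux - vy, uy + vx)   # magnitude of pure shear
    return divergence, curl, deformation
```

A pure shear field such as (u, v) = (x, -y) has zero divergence and curl but deformation 2; it is this shear component that, for a moving plane, scales with slant, which is why a deformation-based observer conflates slant with object motion.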
Visual recognition of objects: behavioral, computational, and neurobiological aspects
I surveyed work on visual object recognition and perception. In animals, vision has been studied mainly at the behavioral and neurobiological levels. Behavioral data typically show what the visual system, by itself or together with the rest of the organism, is capable of. They show, for example, that humans can recognize objects regardless of size and position, but that rotated objects pose problems. Important insights into the organization of behavior have also been provided by people who suffered localized brain damage. We have learned that the brain is divided into areas subserving different and relatively well-defined behaviors. The visual system itself is also organized into different subsystems; the visual cortex alone contains nearly twenty maps of the visual field. And individual neurons respond selectively to visual stimuli, e.g., the orientation of line segments, color, direction of motion, and, most intriguingly, faces. The question is how the actions of all these neurons produce the behavior we observe. How do neurons represent the shape of objects such that they can be recognized? Before we can answer that question, we have to understand the computational aspect of shape representation, the nature of the problem as it were. Many methods for representing shape have been explored, mainly by computer scientists, but so far no satisfactory answers have been found.